
fix(tee): fix race condition in batch locking #3342

Merged
10 commits merged into main from tee/fix/atomic-batch-locking on Dec 3, 2024

Conversation

@pbeza (Collaborator) commented Nov 28, 2024

What ❔

After scaling zksync-tee-prover to two instances/replicas on Azure for azure-stage2, azure-testnet2, and azure-mainnet2, we started experiencing duplicated proving for some batches (see logs). While this is not an erroneous situation, it is wasteful from a resource perspective. This was due to a race condition in batch locking. This PR fixes the issue by adding atomic batch locking.

Why ❔

To fix a bug that only manifests when zksync-tee-prover runs on multiple instances.

Checklist

  • PR title corresponds to the body of PR (we generate changelog entries from PRs).
  • Tests for the changes have been added / updated.
  • Documentation comments have been added / updated.
  • Code has been formatted via zkstack dev fmt and zkstack dev lint.

After [scaling][1] [zksync-tee-prover][2] to two instances/replicas on
Azure for azure-stage2, azure-testnet2, and azure-mainnet2, we started
experiencing [duplicated proving for some batches][3]. While this is not
an erroneous situation, it is wasteful from a resource perspective. This
was due to a race condition in batch locking. This PR fixes the issue by
adding atomic batch locking.

[1]: https://github.com/matter-labs/gitops-kubernetes/pull/7033/files
[2]: https://github.com/matter-labs/zksync-era/blob/aaca32b6ab411d5cdc1234c20af8b5c1092195d7/core/bin/zksync_tee_prover/src/main.rs
[3]: https://grafana.matterlabs.dev/goto/M1I_Bq7HR?orgId=1
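For context, the kind of two-step job pickup that can race when two prover replicas poll concurrently looks roughly like the sketch below; the column names and status values are illustrative assumptions, not the actual zksync-era schema.

```sql
-- Illustrative sketch of the racy pickup (schema and status values are
-- assumptions, not the real zksync-era DAL).
-- Step 1: two replicas polling at the same time can both see the same
-- unpicked batch here...
SELECT l1_batch_number
FROM tee_proof_generation_details
WHERE status = 'unpicked'
ORDER BY l1_batch_number
LIMIT 1;

-- Step 2: ...and both UPDATEs then succeed ($1 is bound to the batch
-- number returned by step 1), so the same batch is proven twice.
UPDATE tee_proof_generation_details
SET status = 'picked_by_prover'
WHERE l1_batch_number = $1;
```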
@pbeza force-pushed the tee/fix/atomic-batch-locking branch from 46dcfde to 7d96c1c on November 29, 2024 12:10
@slowli (Contributor) left a comment

Dumb question: How is the locking made atomic in this PR? AFAIU, the first SELECT statement, if queried concurrently, can still return the same L1 batch number unless some kind of row-level locking is implemented (cf. SELECT FOR UPDATE SKIP LOCKED in this contract verifier query). I'm not even sure the UPDATE query will fail for the transaction committed last in case of a race (maybe it would with serialization isolation level, but I'd argue that erroring is not the best course of action here; row-level locks seem to work better).
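For reference, a minimal sketch of the row-level locking pattern slowli is pointing at; the table, column, and status names are assumptions, and only the FOR UPDATE SKIP LOCKED clause is the point:

```sql
-- Sketch of an atomic pickup using row-level locks (illustrative schema).
-- FOR UPDATE locks the selected row; SKIP LOCKED makes a concurrent
-- replica skip past it instead of blocking or picking the same batch.
UPDATE tee_proof_generation_details
SET status = 'picked_by_prover'
WHERE l1_batch_number = (
    SELECT l1_batch_number
    FROM tee_proof_generation_details
    WHERE status = 'unpicked'
    ORDER BY l1_batch_number
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING l1_batch_number;
```

With this shape, selecting and marking a batch happens in a single statement, so two replicas cannot pick up the same batch.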

Review comments (outdated, resolved):
  • core/lib/dal/src/models/storage_tee_proof.rs
  • core/lib/dal/src/tee_proof_generation_dal.rs
@pbeza (Collaborator, Author) commented Nov 29, 2024

Dumb question: How is the locking made atomic in this PR? (...)

Not a dumb question at all! The dumb one here was me! ;P I totally misunderstood what SQL transactions can actually handle in this context. Had to brush up on the finer details of SQL locking. Thanks for steering me in the right direction!

@pbeza (Collaborator, Author) commented Nov 29, 2024

@slowli, I’ve addressed your code review comments. Take a look when you get a chance.

It’s kinda hard to test properly without deploying it to stage and letting it run for a while. Specifically, let me know if locking rows in the proof_generation_details table is okay (instead of just locking tee_proof_generation_details rows).

@pbeza requested a review from slowli on November 29, 2024 19:00
slowli previously approved these changes on Dec 2, 2024
@pbeza requested a review from slowli on December 3, 2024 12:19
slowli previously approved these changes on Dec 3, 2024
@pbeza (Collaborator, Author) commented Dec 3, 2024

@slowli, @haraldh suggested locking the entire tee_proof_generation_details table to keep things simpler. He also raised a concern that if one TEE prover locks the batch, a second TEE prover instance will just get a "no job" response instead of waiting for new batches to become available.

Let me know if this more fine-grained locking approach still works for you, or if we’re missing something – or maybe there’s an easier way we haven’t considered.
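For illustration, the coarser-grained alternative Harald suggested could look roughly like this; the lock mode and the surrounding statements are assumptions, not the actual change:

```sql
-- Sketch of table-level locking (lock mode is an assumption; any mode
-- that conflicts with itself would serialize concurrent pickups).
BEGIN;
LOCK TABLE tee_proof_generation_details IN EXCLUSIVE MODE;
-- ...select an unpicked batch and mark it as picked...
COMMIT;
```

This serializes all pickups, which is simpler to reason about, but it blocks the second replica for the duration of the transaction instead of letting it skip to the next available batch.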

@pbeza requested a review from slowli on December 3, 2024 13:38
@haraldh added this pull request to the merge queue on Dec 3, 2024
Merged via the queue into main with commit a7dc0ed on Dec 3, 2024
32 checks passed
@haraldh deleted the tee/fix/atomic-batch-locking branch on December 3, 2024 17:15
pbeza added a commit that referenced this pull request Dec 4, 2024
Commit a7dc0ed (PR #3342) was supposed
to fix a race condition in batch locking by introducing SQL row-locking,
but it didn't work as expected. Now we are switching back to
coarser-grained table-level locking as [originally suggested][1] by
Harald. The original fix was hard to test unless deployed to `stage` due
to the nondeterministic nature of the problem, so we needed to merge it
into the `main` branch to properly test it.

[1]: #3342 (comment)